Results 1 - 5 of 5
1.
Article in English | MEDLINE | ID: mdl-37665699

ABSTRACT

Monitoring the healthy development of a fetus requires accurate and timely identification of different maternal-fetal structures as they grow. To facilitate this objective in an automated fashion, we propose a deep-learning-based image classification architecture, COMFormer, to classify maternal-fetal and brain anatomical structures present in 2-D fetal ultrasound (US) images. The proposed architecture classifies two subcategories separately: maternal-fetal structures (abdomen, brain, femur, thorax, mother's cervix (MC), and others) and brain anatomical structures (trans-thalamic (TT), trans-cerebellum (TC), trans-ventricular (TV), and non-brain (NB)). Our architecture relies on a transformer-based approach that leverages spatial and global features through a newly designed residual cross-variance attention block. This block introduces a cross-covariance attention (XCA) mechanism to capture long-range representations from the input using spatial features (e.g., shape, texture, intensity) alongside global features; a minimal sketch of the XCA idea follows this record. To build COMFormer, we used a large publicly available dataset (BCNatal) consisting of 12,400 images from 1792 subjects. Experimental results show that COMFormer outperforms recent CNN- and transformer-based models, achieving 95.64% and 96.33% classification accuracy on maternal-fetal and brain anatomy, respectively.


Subject(s)
Brain; Ultrasonography, Prenatal; Female; Pregnancy; Humans; Brain/diagnostic imaging; Ultrasonography; Electric Power Supplies; Femur
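The cross-covariance attention (XCA) named in this abstract attends across feature channels rather than across tokens, which keeps the cost linear in the number of tokens. Below is a minimal PyTorch sketch of that mechanism under stated assumptions: the head count, temperature parameterization, and projection layers follow the common XCA formulation, not necessarily COMFormer's exact residual block.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class XCA(nn.Module):
    """Cross-covariance attention: the attention map is computed between
    channel dimensions of queries and keys, not between tokens."""
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.num_heads = num_heads
        # learned per-head temperature scaling the channel-channel attention
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                          # x: (B, N, C) token features
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 4, 1)       # each: (B, heads, C/heads, N)
        q = F.normalize(q, dim=-1)                 # L2-normalize along tokens
        k = F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature
        attn = attn.softmax(dim=-1)                # (B, heads, C/h, C/h)
        out = (attn @ v).permute(0, 3, 1, 2).reshape(B, N, C)
        return self.proj(out)
```

In a residual block of the kind the abstract describes, this layer would typically be applied as x = x + XCA(LayerNorm(x)).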
2.
Cancers (Basel) ; 14(16)2022 Aug 13.
Article in English | MEDLINE | ID: mdl-36010903

ABSTRACT

In this article, we propose ICOSeg, a lightweight deep learning model that accurately segments the immune-checkpoint biomarker Inducible T-cell COStimulator (ICOS) protein in colon cancer from immunohistochemistry (IHC) slide patches. The proposed model relies on the MobileViT network, which includes two main components: convolutional neural network (CNN) layers for extracting spatial features, and a transformer block for capturing a global feature representation from IHC patch images. ICOSeg uses encoder and decoder sub-networks: the encoder extracts the positive cells' salient features (i.e., shape, texture, intensity, and margin), and the decoder reconstructs those features into segmentation maps. To improve the model's generalization capabilities, we added a channel attention mechanism to the bottleneck of the encoder (sketched below). This mechanism highlights the most relevant cell structures by discriminating between the targeted cells and background tissue. We performed extensive experiments on our in-house dataset. The experimental results confirm that the proposed model outperforms state-of-the-art methods while using 8× fewer parameters.
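As a rough illustration of the channel-attention idea added at the encoder bottleneck, here is a squeeze-and-excitation-style gate in PyTorch. The reduction ratio and exact placement are assumptions; the paper's block may differ in detail.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate: pool global context, then
    reweight each feature channel with a learned sigmoid gate."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)        # squeeze: (B, C, 1, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                          # per-channel gate in [0, 1]
        )

    def forward(self, x):                          # x: (B, C, H, W)
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                               # excite: reweight channels
```

At a bottleneck, such a gate suppresses channels dominated by background tissue and emphasizes those responding to the targeted cells.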

3.
Diagnostics (Basel) ; 13(1)2022 Dec 29.
Article in English | MEDLINE | ID: mdl-36611396

ABSTRACT

Medical image analysis methods for mammograms, ultrasound, and magnetic resonance imaging (MRI) cannot capture the underlying cellular-level features needed to understand the cancer microenvironment, which makes them unsuitable for breast cancer subtype classification. In this paper, we propose a convolutional neural network (CNN)-based breast cancer classification method for hematoxylin and eosin (H&E) whole slide images (WSIs). The proposed method incorporates fused mobile inverted bottleneck convolutions (FMB-Conv) and mobile inverted bottleneck convolutions (MBConv) with a dual squeeze and excitation (DSE) network to accurately classify breast cancer tissue into binary (benign and malignant) and eight subtype classes from histopathology images. To this end, a pre-trained EfficientNetV2 network is used as a backbone with a modified DSE block that combines spatial and channel-wise squeeze and excitation layers to highlight important low-level and high-level abstract features. Our method outperformed the ResNet101, InceptionResNetV2, and EfficientNetV2 networks on the publicly available BreakHis dataset for binary and multi-class breast cancer classification in terms of precision, recall, and F1-score at multiple magnification levels.
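The dual squeeze and excitation (DSE) block combines channel-wise and spatial excitation. The PyTorch sketch below shows one common way to pair the two branches; fusing them with an element-wise max is an assumption, and the paper's modified DSE block may combine them differently.

```python
import torch
import torch.nn as nn

class DSE(nn.Module):
    """Dual squeeze-and-excitation: a channel gate (what to attend to)
    and a spatial gate (where to attend), applied in parallel."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.cse = nn.Sequential(                  # channel-wise excitation
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.sse = nn.Sequential(                  # spatial excitation
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):                          # x: (B, C, H, W)
        return torch.max(x * self.cse(x), x * self.sse(x))
```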

4.
Cancers (Basel) ; 13(15)2021 Jul 29.
Article in English | MEDLINE | ID: mdl-34359723

ABSTRACT

Biomarkers identify patient response to therapy. The potential immune-checkpoint biomarker Inducible T-cell COStimulator (ICOS), which regulates T-cell activation and is involved in adaptive immune responses, is of great interest. We have previously shown that open-source software for digital pathology image analysis can be used to detect and quantify ICOS using cell detection algorithms based on traditional image processing techniques. Currently, artificial intelligence (AI) based on deep learning methods is significantly impacting the domain of digital pathology, including the quantification of biomarkers. In this study, we propose a general AI-based workflow for applying deep learning to cell segmentation/detection in IHC slides as a basis for quantifying nuclear staining biomarkers such as ICOS. It consists of two main parts: a simplified but robust annotation process, and cell segmentation/detection models. The result is an optimised annotation process with a new user-friendly tool that can interact with other open-source software and assists pathologists and scientists in creating and exporting data for deep learning. We present a set of architectures for cell-based segmentation/detection, and quantify and analyse the trade-offs between them, showing them to be more accurate and less time-consuming than traditional methods. This approach can identify the best tool for assessing the prognostic significance of ICOS protein expression.
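To make the quantification step concrete, here is a small sketch of turning a model's nuclear-stain probability mask into a positive-cell count with SciPy connected-component labelling. The threshold and minimum-area values are illustrative assumptions, not parameters from the study.

```python
import numpy as np
from scipy import ndimage

def quantify_positive_cells(prob_mask, threshold=0.5, min_area=20):
    """Count stained nuclei in a (H, W) probability mask by thresholding,
    labelling connected components, and dropping tiny (noise) regions."""
    binary = prob_mask >= threshold
    labels, n = ndimage.label(binary)              # connected components
    areas = ndimage.sum(binary, labels, index=range(1, n + 1))
    kept = int(np.sum(np.asarray(areas) >= min_area))
    density = kept / binary.size                   # cells per pixel of tile
    return kept, density
```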

5.
IEEE J Biomed Health Inform ; 24(3): 866-877, 2020 03.
Article in English | MEDLINE | ID: mdl-31199277

ABSTRACT

Recent studies have shown that the environment where people eat can affect their nutritional behavior [1]. In this paper, we provide automatic tools for the personalized analysis of a person's health habits through the examination of daily recorded egocentric photo-streams. Specifically, we propose a new automatic approach for the classification of food-related environments that distinguishes 15 such scenes. In this way, people can monitor the context around their food intake and gain objective insight into their daily eating routine. Our model classifies food-related scenes organized in a semantic hierarchy (a toy sketch of hierarchical classification follows this record). Additionally, we present and make available a new egocentric dataset composed of more than 33,000 images recorded by a wearable camera, on which our proposed model has been tested. Our approach obtains an accuracy and F-score of 56% and 65%, respectively, clearly outperforming the baseline methods.


Subject(s)
Food/classification; Image Processing, Computer-Assisted/methods; Photography/classification; Algorithms; Humans; Life Style; Machine Learning
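As a toy illustration of classifying over a semantic hierarchy, the PyTorch sketch below scores each fine-grained scene jointly with its parent scene group. The two-level taxonomy mapping and the use of a flat fine head in place of a true conditional distribution are simplifying assumptions for demonstration only.

```python
import torch
import torch.nn as nn

class HierarchicalHead(nn.Module):
    """Two-level classifier: a coarse head over scene groups and a fine
    head over scenes, combined so each scene inherits its group's score."""
    def __init__(self, feat_dim, n_coarse, n_fine, fine_to_coarse):
        super().__init__()
        self.coarse = nn.Linear(feat_dim, n_coarse)
        self.fine = nn.Linear(feat_dim, n_fine)
        # parent[i] = index of the coarse group containing fine class i
        self.register_buffer("parent", torch.tensor(fine_to_coarse))

    def forward(self, feats):                      # feats: (B, feat_dim)
        logp_coarse = self.coarse(feats).log_softmax(-1)
        logp_fine = self.fine(feats).log_softmax(-1)
        # log P(scene) ~= log P(group) + log P(scene | group); the flat
        # fine head approximates the conditional term here
        return logp_fine + logp_coarse[:, self.parent]
```

For the 15 scenes described in the abstract, fine_to_coarse would map each scene index to its semantic group (e.g., 15 fine classes under a handful of groups).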